LSGAN-AT: enhancing malware detector robustness against adversarial examples

Authors

Abstract

Adversarial Malware Example (AME)-based adversarial training can effectively enhance the robustness of Machine Learning (ML)-based malware detectors against AMEs. AME quality is a key factor in this enhancement. The Generative Adversarial Network (GAN) is one kind of AME generation method, but existing GAN-based methods suffer from inadequate optimization, mode collapse, and instability. In this paper, we propose a novel approach (denoted LSGAN-AT) to enhance the robustness of ML-based malware detectors against Adversarial Examples, which includes an LSGAN module and an AT module. The LSGAN module generates more effective and smoother AMEs by utilizing brand-new network structures and a Least Square (LS) loss to optimize boundary samples. The AT module performs adversarial training using the generated AMEs to obtain a Robust Malware Detector (RMD). Extensive experiment results validate the better transferability of the generated AMEs in terms of attacking 6 ML detectors, and the effectiveness of the RMD in resisting the MalGAN black-box attack. The results also verify the RMD's performance in terms of recognition rate.
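
The Least Square loss mentioned in the abstract is, in the standard LSGAN formulation (the paper's exact variant may differ), a pair of squared-error objectives against target labels, where \(a\), \(b\), and \(c\) denote the labels for fake data, real data, and the values the generator wants the discriminator to assign to fake data:

```latex
\min_{D} V(D) = \tfrac{1}{2}\,\mathbb{E}_{x \sim p_{\mathrm{data}}}\!\left[(D(x) - b)^{2}\right]
              + \tfrac{1}{2}\,\mathbb{E}_{z \sim p_{z}}\!\left[(D(G(z)) - a)^{2}\right]

\min_{G} V(G) = \tfrac{1}{2}\,\mathbb{E}_{z \sim p_{z}}\!\left[(D(G(z)) - c)^{2}\right]
```

Unlike the saturating cross-entropy GAN loss, the squared error still penalizes generated samples that lie on the correct side of the decision boundary but far from the real data, which is what makes the loss useful for optimizing boundary samples.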


Similar articles

Adversarial Examples for Malware Detection

Machine learning models are known to lack robustness against inputs crafted by an adversary. Such adversarial examples can, for instance, be derived from regular inputs by introducing minor—yet carefully selected—perturbations. In this work, we expand on existing adversarial example crafting algorithms to construct a highly-effective attack that uses adversarial examples against malware detecti...
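
The crafting idea described above — minor yet carefully selected perturbations — can be illustrated with a one-step gradient-sign attack on a toy linear classifier. The weights and inputs here are hypothetical, and real malware-domain attacks must additionally keep the perturbed sample a valid, functional binary:

```python
import numpy as np

def sigmoid(s):
    return 1.0 / (1.0 + np.exp(-s))

def fgsm(x, y, w, eps):
    """One-step fast-gradient-sign attack on a logistic model sign(w . x)."""
    p = sigmoid(y * np.dot(w, x))      # model's confidence in the true label y (+1/-1)
    grad_x = -y * (1.0 - p) * w        # gradient of -log p with respect to the input x
    return x + eps * np.sign(grad_x)   # L-infinity-bounded perturbation of size eps

w = np.array([1.0, -1.0])              # hypothetical linear detector
x = np.array([0.3, -0.3])              # scored positive, i.e. correctly classified
x_adv = fgsm(x, y=1, w=w, eps=1.0)

# The score's sign flips, so the detector's prediction changes.
print(np.dot(w, x) > 0, np.dot(w, x_adv) > 0)
```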


Parseval Networks: Improving Robustness to Adversarial Examples

We introduce Parseval networks, a form of deep neural networks in which the Lipschitz constant of linear, convolutional and aggregation layers is constrained to be smaller than 1. Parseval networks are empirically and theoretically motivated by an analysis of the robustness of the predictions made by deep neural networks when their input is subject to an adversarial perturbation. The most impor...
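
The constraint above can be maintained cheaply during training with the retraction step used in Parseval networks, W ← (1+β)W − βWWᵀW, which pulls a weight matrix toward a Parseval tight frame (WWᵀ = I, so the layer's Lipschitz constant is at most 1). A minimal NumPy sketch, with the shapes, seed, and β value chosen purely for illustration:

```python
import numpy as np

def parseval_retraction(W, beta=0.5, steps=50):
    """Repeatedly apply W <- (1 + beta) W - beta W W^T W.

    Each step drives the singular values of W toward 1, i.e. toward
    a Parseval tight frame with orthonormal rows (W W^T = I)."""
    for _ in range(steps):
        W = (1.0 + beta) * W - beta * (W @ W.T @ W)
    return W

rng = np.random.default_rng(0)
W = 0.1 * rng.standard_normal((3, 5))   # a hypothetical 3x5 layer weight
W = parseval_retraction(W)

print(np.allclose(W @ W.T, np.eye(3), atol=1e-6))   # True
```

In the paper's training loop the retraction is interleaved with gradient steps using a small β, so a single application per step suffices; the many-step version here simply makes the fixed point visible.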



Certified Defenses against Adversarial Examples

While neural networks have achieved high accuracy on standard image classification benchmarks, their accuracy drops to nearly zero in the presence of small adversarial perturbations to test inputs. Defenses based on regularization and adversarial training have been proposed, but often followed by new, stronger attacks that defeat these defenses. Can we somehow end this arms race? In this work, ...


Ensembling as a Defense Against Adversarial Examples

Adversarial attacks on machine learning systems take two main flavors. First, there are training-time attacks, which involve compromising the data that the system is trained on. Unsurprisingly, machines can misclassify examples if they are trained on malicious data. Second, there are test-time attacks, which involve crafting an adversarial example, which a human would easily classify a...
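
The test-time defense the title refers to aggregates several independently trained models, so an adversarial example must fool a majority of them rather than a single detector. A minimal majority-vote sketch over hypothetical binary classifiers:

```python
import numpy as np

def majority_vote(classifiers, x):
    """Query every model and return the majority label (+1 or -1)."""
    votes = np.array([clf(x) for clf in classifiers])
    return 1 if votes.sum() > 0 else -1

# Three hypothetical linear models with different decision boundaries.
models = [
    lambda x: 1 if np.dot([1.0, 0.5], x) > 0 else -1,
    lambda x: 1 if np.dot([0.5, 1.0], x) > 0 else -1,
    lambda x: 1 if np.dot([1.0, 1.0], x) > 0 else -1,
]

x = np.array([0.4, -0.25])
# This input fools the second model alone, but is outvoted by the other two.
print(majority_vote(models, x))   # 1
```

The defense rests on the models disagreeing about off-manifold inputs; a perturbation that transfers across all ensemble members defeats it, which is why transferability (as studied in the LSGAN-AT paper above) matters.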



Journal

Journal title: Cybersecurity

Year: 2021

ISSN: 2523-3246

DOI: https://doi.org/10.1186/s42400-021-00102-9